
    Automatically detecting open academic review praise and criticism

    This is an accepted manuscript of an article published by Emerald in Online Information Review on 15 June 2020. The accepted version may differ from the final published version, accessible at https://doi.org/10.1108/OIR-11-2019-0347.
    Purpose: Peer reviewer evaluations of academic papers are known to be variable in content and overall judgements but are important academic publishing safeguards. This article introduces a sentiment analysis program, PeerJudge, to detect praise and criticism in peer evaluations. It is designed to support editorial management decisions and reviewers in the scholarly publishing process and in grant funding decision workflows. The initial version of PeerJudge is tailored for reviews from F1000Research’s open peer review publishing platform.
    Design/methodology/approach: PeerJudge uses a lexical sentiment analysis approach with a human-coded initial sentiment lexicon and machine learning adjustments and additions. It was built with an F1000Research development corpus and evaluated on a different F1000Research test corpus using reviewer ratings.
    Findings: PeerJudge can predict F1000Research judgements from negative evaluations in reviewers’ comments more accurately than baseline approaches, although not from positive reviewer comments, which seem to be largely unrelated to reviewer decisions. Within the F1000Research model of post-publication peer review, the absence of any detected negative comments is a reliable indicator that an article will be ‘approved’, but the presence of moderately negative comments could lead to either an ‘approved’ or an ‘approved with reservations’ decision.
    Originality/value: PeerJudge is the first transparent AI approach to peer review sentiment detection. It may be used to identify anomalous reviews whose text potentially does not match their judgements, for individual checks or systematic bias assessments.
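
    For illustration only, the sketch below shows the general kind of lexicon-based scoring the abstract describes. It is not the authors’ PeerJudge implementation; the lexicon entries, weights, and tokenisation are invented placeholders.

    import re

    # Hypothetical seed lexicons. PeerJudge uses a human-coded lexicon with
    # machine-learned adjustments; these entries are invented for illustration.
    PRAISE = {"clear": 1, "thorough": 2, "novel": 2, "rigorous": 2}
    CRITICISM = {"unclear": -1, "flawed": -3, "missing": -2, "weak": -2}

    def score_review(text: str) -> dict:
        """Return separate praise and criticism scores for one review."""
        tokens = re.findall(r"[a-z\-]+", text.lower())
        praise = sum(PRAISE.get(t, 0) for t in tokens)
        criticism = sum(CRITICISM.get(t, 0) for t in tokens)
        return {"praise": praise, "criticism": criticism}

    print(score_review("The method is novel but the evaluation is weak and unclear."))
    # -> {'praise': 2, 'criticism': -3}

    Keeping praise and criticism as separate scores, rather than one net score, mirrors the abstract’s finding that negative comments are far more predictive of judgements than positive ones.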

    Does the use of open, non-anonymous peer review in scholarly publishing introduce bias? Evidence from the F1000Research post-publication open peer review publishing model

    This is an accepted manuscript of an article published by SAGE in Journal of Information Science on 5 July 2020. The published version can be accessed at https://doi.org/10.1177/0165551520938678. The accepted version may differ from the final published version.
    As part of moves towards open knowledge practices, making peer review open is cited as a way to enable fuller scrutiny and transparency of assessments of research. There are now many flavours of open peer review in use across scholarly publishing, including those where reviews are fully attributable and the reviewer is named. This study examines whether there is any evidence of bias in two areas of common critique of open, non-anonymous (named) peer review, as used in the post-publication peer review system operated by the open-access scholarly publishing platform F1000Research. First, is there evidence of potential bias where a reviewer based in a specific country assesses the work of an author also based in that country? Second, are reviewers influenced by being able to see the comments and know the origins of a previous reviewer? Based on over four years of open peer review data, we found some evidence, albeit weak, that being based in the same country as an author may influence a reviewer’s decision, while there was insufficient evidence to conclude that being able to read an existing published review prior to submitting their own encourages conformity. Thus, whilst immediate publishing of peer review reports appears to be unproblematic, caution may be needed when selecting same-country reviewers in open systems if other studies confirm these results.
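
    As an illustration of how the first question might be probed, the sketch below runs a simple 2x2 contingency test of reviewer-author country match against approval decision. This is not the paper’s actual analysis; the counts are invented placeholders.

    from scipy.stats import chi2_contingency

    # Hypothetical counts: rows are same-country vs. different-country
    # reviewer-author pairs; columns are 'approved' vs. 'not approved'.
    table = [[120, 30],    # same country
             [400, 150]]   # different country

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, p={p:.3f}")  # a small p-value would suggest an association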